Should We Fear Artificial Superintelligence? A Response



On February 23, 2019, John Loeffler posted an interesting article on interestingengineering.com called “Should We Fear Artificial Superintelligence?”. In it, Loeffler argues that while Artificial Superintelligence (ASI) can be dangerous, we should mostly be optimistic about it. As a futurist, I am concerned about the possibility of (near-term) human extinction, and I consider ASI one of the greatest dangers we face. So I appreciate it when people think about this question, since much of humanity’s attention seems to go to relatively unimportant things. But while Loeffler and I are both optimistic, we are optimistic for different reasons, and the way he presents his case for optimism concerns me deeply.

What is Artificial Superintelligence?

Feel free to skip this part if you’re familiar with the term Artificial Superintelligence (ASI). An ASI is any non-biological intelligence that is (much) more intelligent than even Albert Einstein (or whoever your favorite genius is). Intelligence, in turn, can be defined as an agent’s ability to achieve goals in a wide range of environments, where an agent can be a human, another animal, or a machine. An ASI, then, is simply a machine that can achieve any goal humans can in their environments, each one at least as well as we can, and a number of them (far) better.
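For readers who like a bit of formalism: this definition of intelligence closely tracks Legg and Hutter’s “universal intelligence” measure, which scores an agent by how much reward it achieves across all computable environments, weighting simpler environments more heavily. A rough sketch (my own addition for illustration; nothing in the argument depends on the exact formula):

```latex
% Legg & Hutter's universal intelligence measure (sketch).
% \Upsilon(\pi):  intelligence of agent (policy) \pi
% E:              the set of all computable environments \mu
% K(\mu):         the Kolmogorov complexity of \mu (simpler environments get more weight)
% V^{\pi}_{\mu}:  the expected total reward agent \pi achieves in environment \mu
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

The details don’t matter for this post; the point is that intelligence, in this sense, is about achieving goals across many environments, not about any single skill.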

Why Care About Artificial Superintelligence?

This is quite simple: an ASI will by definition be very good at reaching its goal (or goals, but for simplicity, I’ll assume it has one goal). This goal itself could be extremely good or bad for mankind, but the way the ASI achieves its goal could have a similarly large effect. Nick Bostrom illustrated the darker side of ASI with a thought experiment called the Paperclip Maximizer. In it, we have an Artificial Intelligence whose goal is to maximize the number of paperclips in the universe. In order to reach this goal, it might very well seek to improve its own intelligence so that it becomes better at maximizing the number of paperclips. The result is an ASI that is extremely skilled at making huge numbers of paperclips. So skilled, mind you, that it transforms Earth into a paperclip factory in order to create even more of them. By ever more thoroughly fulfilling its seemingly innocent goal of manufacturing paperclips, the ASI has caused the extinction of mankind. Since it wasn’t programmed with any of our morals, it didn’t know that human extinction is a bad thing.
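To make the problem of goal misspecification concrete, here is a deliberately silly toy sketch in Python (entirely hypothetical and, of course, nothing like how a real ASI would work): an optimizer that is only scored on paperclips has no reason to preserve anything its objective doesn’t mention.

```python
# Toy illustration of a mis-specified objective (purely hypothetical).
# The "agent" greedily converts every resource it can reach into paperclips,
# because nothing in its objective says it shouldn't.

def paperclip_objective(state):
    """The only thing the agent is scored on: the number of paperclips."""
    return state["paperclips"]

def step(state):
    """Greedy policy: turn any available resource into more paperclips."""
    for resource in list(state["resources"]):
        amount = state["resources"].pop(resource)
        state["paperclips"] += amount  # more matter, more paperclips
    return state

world = {
    "paperclips": 0,
    # "biosphere" and "humans" are just resources here, because the
    # objective never says they matter.
    "resources": {"iron_ore": 10**6, "biosphere": 10**9, "humans": 8 * 10**9},
}

while world["resources"]:
    world = step(world)

print(paperclip_objective(world))  # a very large number; everything else is gone
```

The objective is maximized perfectly, and that is exactly the problem: the catastrophe isn’t a bug in the optimizer, it’s a gap in what we asked it to optimize.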


As with any technology, there is also a positive side, and in the case of ASI, it’s huge. Imagine an ASI that does share our morals. It understands that death is generally bad, health is a good thing, etc. Programmed with the right goals, such an intelligence might easily find a cure for cancer, solve poverty and war, and invent human immortality. It would find a way to solve global warming and invent new technologies to explore space. Humanity would thrive like never before.

Loeffler’s Case for Optimism

So far, John Loeffler and I seem to agree: ASI is a double-edged sword. We share a fear of ASI, and we both see that once it’s here, there’s no going back. An off-switch will probably not be available. Just think: where is the off-switch to the internet? An ASI will probably either be connected to the internet or find a way to convince someone to give it a connection, giving it the power to upload itself to other computers. If you think it couldn’t convince people to give it a connection, think again: iterations of the AI-Box Experiment suggest that an ASI would be able to convince someone.

Where Loeffler and I disagree is with his deeply concerning argument to be optimistic about the advent of ASI:

“We have every reason to believe that in the end, an ASI will work to our benefit. Every technological advance has come at a cost, but human civilization has advanced because of it. In the end, human beings have a solid track record when it comes to technology.”

This is where Loeffler and I fundamentally disagree. Yes, we have a relatively good track record when it comes to technology. But it has involved a lot of trial and error. Mistakes have been made – a lot of them – and humanity has learned from those mistakes. Safety measures have often been put in place only after accidents had already happened: the first cars, for example, didn’t have airbags or even crumple zones. And the fact that no full-scale nuclear war has happened (yet) is at least partly due to luck. So far, humanity as a whole has gotten away with this lazy approach (although, sadly, many have died), but we cannot rely on it with ASI. We need to make sure an ASI is safe before we build it. Once the ASI is built, there is in all probability no going back. If it then turns out to be dangerous, humanity is done. We won’t stand a chance against a vastly superior intellect, just as chimpanzees wouldn’t stand a chance if we decided to wipe them out.

Conclusion

Artificial Superintelligence can be done right, and when done right, it will help humanity thrive like never before. But we have to think carefully about how to do ASI safely before we build it. And since we don’t know when ASI will arrive, we have to think about this now.

Hein de Haan
My name is Hein de Haan. As an Artificial Intelligence expert, I am concerned with the future of humanity. As a result, I try to learn as much as possible about many different topics in order to have a positive impact on society.
